Documenting use cases in the affective computing domain using Unified Modeling Language
The study of the ethical impact of AI and the design of trustworthy systems require analysing the scenarios in which AI systems are used, an analysis related to the software engineering concept of "use case" and to the legal term "intended purpose". However, there is no standard methodology for use case documentation that covers the context of use, scope, functional requirements and risks of an AI system. In this work, we propose a novel documentation methodology for AI use cases, with a special focus on the affective computing domain. Our approach builds upon an assessment of the use case information needs documented in the research literature and in the recently proposed European regulatory framework for AI. From this assessment, we adopt and adapt the Unified Modeling Language (UML), which has been used over the last two decades mostly by software engineers. Each use case is then represented by a UML diagram and a structured table, and we provide a set of examples illustrating the application of the methodology to several affective computing scenarios.

Comment: 8 pages, 5 figures, 2 tables
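The structured-table representation mentioned in the abstract could be captured as a simple data record. The field names below are illustrative assumptions inspired by the information needs listed in the abstract (context of use, scope, functional requirements, risks), not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Illustrative record for documenting an AI use case.

    Field names are assumptions based on the abstract's listed
    information needs, not the paper's actual documentation schema.
    """
    name: str
    context_of_use: str                  # where and by whom the system is used
    scope: str                           # boundaries of the intended purpose
    functional_requirements: list = field(default_factory=list)
    risks: list = field(default_factory=list)

# Hypothetical affective-computing example
uc = AIUseCase(
    name="Emotion-aware tutoring",
    context_of_use="Remote learning sessions with adult students",
    scope="Detect engagement from facial expressions only",
    functional_requirements=["Classify the six basic emotions plus neutral"],
    risks=["Misclassification may mislabel a student as disengaged"],
)
print(uc.name)
```

A real application of the methodology would pair a record like this with the corresponding UML use case diagram.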
Facial Emotional Classifier For Natural Interaction
The recognition of emotional information is a key step toward giving computers the ability to interact more naturally and intelligently with people. We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of a set of characteristic facial points (that are part of the MPEG-4 feature points) to extract relevant emotional information (basically five distances, presence of wrinkles in the eyebrow and mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a database of more than 1500 images. The system has been integrated in a 3D engine for managing virtual characters, allowing the exploration of new forms of natural interaction.
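The general idea of a distance-based expression classifier can be sketched with a toy example. The landmark names, the distances chosen and the thresholds below are purely illustrative assumptions, not the paper's actual MPEG-4 feature points or tuned rules:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2D landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_expression(landmarks, neutral):
    """Toy rule-based classifier in the spirit of the distance-based
    method: compare characteristic distances against a neutral face.
    Landmark names and thresholds here are illustrative only."""
    mouth_open = distance(landmarks["upper_lip"], landmarks["lower_lip"])
    mouth_open_ref = distance(neutral["upper_lip"], neutral["lower_lip"])
    brow_raise = distance(landmarks["brow"], landmarks["eye"])
    brow_raise_ref = distance(neutral["brow"], neutral["eye"])

    if mouth_open > 1.5 * mouth_open_ref and brow_raise > 1.3 * brow_raise_ref:
        return "surprise"
    if mouth_open > 1.5 * mouth_open_ref:
        return "joy"
    if brow_raise < 0.7 * brow_raise_ref:
        return "anger"
    return "neutral"

# Hypothetical landmark coordinates (x, y)
neutral = {"upper_lip": (0, 0), "lower_lip": (0, 2), "brow": (0, 10), "eye": (0, 8)}
face = {"upper_lip": (0, 0), "lower_lip": (0, 4), "brow": (0, 11), "eye": (0, 8)}
print(classify_expression(face, neutral))  # mouth wide open + raised brows -> "surprise"
```

The actual method uses five tuned distances plus wrinkle and mouth-shape cues, so this sketch only conveys the flavour of the rule-based approach.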
OverNet: Lightweight Multi-Scale Super-Resolution with Overscaling Network
Super-resolution (SR) has achieved great success thanks to the development of deep convolutional neural networks (CNNs). However, as the depth and width of the networks increase, CNN-based SR methods face the challenge of computational complexity in practice. Moreover, most of them train a dedicated model for each target resolution, losing generality and increasing memory requirements. To address these limitations, we introduce OverNet, a deep but lightweight convolutional network that solves single-image super-resolution (SISR) at arbitrary scale factors with a single model. We make the following contributions. First, we introduce a lightweight recursive feature extractor that enforces efficient reuse of information through a novel recursive structure of skip and dense connections. Second, to maximize the performance of the feature extractor, we propose a reconstruction module that generates accurate high-resolution images from overscaled feature maps and can be used independently to improve existing architectures. Third, we introduce a multi-scale loss function to achieve generalization across scales. Through extensive experiments, we demonstrate that our network outperforms previous state-of-the-art results on standard benchmarks while using fewer parameters than previous approaches.

Comment: 10 pages, 4 figures, conference, accepted by WACV202
Affective computing in a T-Learning application
This paper presents T-EDUCO, the first t-learning affective-aware tutoring tool. T-EDUCO goes further than simply broadcasting an interactive educational application by allowing the figure of a tutor to be present and to govern the students’ learning process. The tutor can access academic and emotional information about the students through a continuous “emotional path” that includes timestamps and information about the progress made in each exercise. In this way, personal messages or extra educational content for improving learning can be sent to the students. All this is made possible by a combination of broadcast and broadband technologies.
Towards Children-Centred Trustworthy Conversational Agents
Conversational agents (CAs) have been increasingly used in various domains, including education, health and entertainment. One of the growing areas of research is the use of CAs with children. However, the development and deployment of CAs for children come with many specific challenges and ethical and social responsibility concerns. This chapter reviews the related work on CAs and children, points out the most popular topics and identifies opportunities and risks. We also present our proposal for ethical guidelines on the development of trustworthy artificial intelligence (AI), which provide a framework for the ethical design and deployment of CAs with children. The chapter highlights, among other principles, the importance of transparency and inclusivity to safeguard user rights in AI technologies. Additionally, we present the adaptation of previous AI ethical guidelines to the specific case of CAs and children, highlighting the importance of data protection and human agency. Finally, the application of the ethical guidelines to the design of a conversational agent is presented, serving as an example of how these guidelines can be integrated into the development process of these systems. Ethical principles should guide the research and development of CAs for children to enhance their learning and social development.
Emotional facial expression classification for multimodal user interfaces
Abstract. We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (that are part of the MPEG-4 feature points) to extract relevant emotional information (basically five distances, presence of wrinkles and mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a database of 399 images. For the moment, the method is applied to static images; application to sequences is now under development. The extraction of such information about the user is of great interest for the development of new multimodal user interfaces. Keywords. Facial Expression, Multimodal Interface.
Region-based facial representation for real-time Action Units intensity detection across datasets